Interpretive scholars generate knowledge from text corpora by manually sampling documents, applying codes, and refining and collating codes into categories until meaningful themes emerge. Given a large corpus, machine learning could help scale this data sampling and analysis, but prior research shows that experts are generally concerned about algorithms potentially disrupting or driving interpretive scholarship. We take a human-centered design approach to address concerns around machine-assisted interpretive research and build Scholastic, which incorporates a machine-in-the-loop clustering algorithm to scaffold interpretive text analysis. As a scholar applies codes to documents and refines them, the resulting coding schema serves as structured metadata that constrains hierarchical document and word clusters inferred from the corpus. Interactive visualizations of these clusters can help scholars strategically sample documents further toward insights. Scholastic demonstrates how human-centered algorithm design and visualizations employing familiar metaphors can support inductive and interpretive research methodologies through interactive topic modeling and document clustering.
Transformer-based models have gained large popularity and demonstrated promising results in long-term time-series forecasting in recent years. In addition to learning attention in time domain, recent works also explore learning attention in frequency domains (e.g., Fourier domain, wavelet domain), given that seasonal patterns can be better captured in these domains. In this work, we seek to understand the relationships between attention models in different time and frequency domains. Theoretically, we show that attention models in different domains are equivalent under linear conditions (i.e., linear kernel to attention scores). Empirically, we analyze how attention models of different domains show different behaviors through various synthetic experiments with seasonality, trend and noise, with emphasis on the role of softmax operation therein. Both these theoretical and empirical analyses motivate us to propose a new method: TDformer (Trend Decomposition Transformer), that first applies seasonal-trend decomposition, and then additively combines an MLP which predicts the trend component with Fourier attention which predicts the seasonal component to obtain the final prediction. Extensive experiments on benchmark time-series forecasting datasets demonstrate that TDformer achieves state-of-the-art performance against existing attention-based models.
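The seasonal-trend split that TDformer builds on can be sketched minimally: a moving-average trend estimate, a seasonal residual, and a naive forecast that extrapolates the trend linearly and repeats the seasonal component periodically. The forecasting step below is a deliberately simplified stand-in (a linear fit and a periodic repeat) for the paper's MLP and Fourier-attention branches; function names and the kernel size are illustrative choices, not from the paper.

```python
import numpy as np

def decompose(x, kernel=25):
    """Split a series into a moving-average trend and a seasonal residual."""
    pad = kernel // 2
    xp = np.pad(x, (pad, kernel - 1 - pad), mode="edge")
    trend = np.convolve(xp, np.ones(kernel) / kernel, mode="valid")
    return trend, x - trend

def naive_forecast(x, horizon, period, kernel=25):
    """Toy stand-in for TDformer's two branches: linear trend extrapolation
    (an MLP in the paper) plus periodic repetition of the seasonal residual
    (Fourier attention in the paper), combined additively."""
    trend, seasonal = decompose(x, kernel)
    t = np.arange(len(x))
    slope, intercept = np.polyfit(t, trend, 1)
    future_t = np.arange(len(x), len(x) + horizon)
    trend_fc = slope * future_t + intercept
    seasonal_fc = seasonal[-period:][np.arange(horizon) % period]
    return trend_fc + seasonal_fc
```

By construction the decomposition is lossless (trend plus seasonal reproduces the input exactly), which is what lets the two branches be modeled independently and summed.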
Boundary conditions (BCs) are important groups of physics-enforced constraints that are necessary for solutions of Partial Differential Equations (PDEs) to satisfy at specific spatial locations. These constraints carry important physical meaning, and guarantee the existence and the uniqueness of the PDE solution. Current neural-network based approaches that aim to solve PDEs rely only on training data to help the model learn BCs implicitly. There is no guarantee of BC satisfaction by these models during evaluation. In this work, we propose Boundary enforcing Operator Network (BOON) that enables the BC satisfaction of neural operators by making structural changes to the operator kernel. We provide our refinement procedure, and demonstrate the satisfaction of physics-based BCs, e.g. Dirichlet, Neumann, and periodic by the solutions obtained by BOON. Numerical experiments based on multiple PDEs with a wide variety of applications indicate that the proposed approach ensures satisfaction of BCs, and leads to more accurate solutions over the entire domain. The proposed correction method exhibits a (2X-20X) improvement over a given operator model in relative $L^2$ error (0.000084 relative $L^2$ error for Burgers' equation).
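What "guaranteed BC satisfaction" means can be illustrated with the simplest possible case: an additive correction that pins a predicted 1-D solution to Dirichlet boundary values exactly. This is only an illustration of the guarantee, not BOON's actual kernel-level refinement procedure.

```python
import numpy as np

def enforce_dirichlet(u, left, right):
    """Additively correct a predicted 1-D solution u on [0, 1] so that
    u[0] == left and u[-1] == right hold exactly. A linear blend of the
    two endpoint errors leaves the interior only mildly perturbed.
    Note: a simple illustration, not BOON's structural kernel change."""
    x = np.linspace(0.0, 1.0, len(u))
    return u + (1.0 - x) * (left - u[0]) + x * (right - u[-1])
```

At x = 0 the correction contributes exactly `left - u[0]`, and at x = 1 exactly `right - u[-1]`, so the boundary conditions hold by construction rather than approximately through training data.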
This chapter sheds light on the synaptic organization of the brain from the perspective of computational neuroscience. It provides an introductory overview on how to account for empirical data in mathematical models, implement them in software, and perform simulations reflecting experiments. This path is demonstrated with respect to four key aspects of synaptic signaling: the connectivity of brain networks, synaptic transmission, synaptic plasticity, and the heterogeneity across synapses. Each step and aspect of the modeling and simulation workflow comes with its own challenges and pitfalls, which are highlighted and addressed in detail.
We introduce a challenging decision-making task that we call active acquisition for multimodal temporal data (A2MT). In many real-world scenarios, input features are not readily available at test time and must instead be acquired at significant cost. With A2MT, we aim to learn agents that actively select which modalities of an input to acquire, trading off acquisition cost and predictive performance. A2MT extends a previous task called active feature acquisition to temporal decision making about high-dimensional inputs. Further, we propose a method based on the Perceiver IO architecture to address A2MT in practice. Our agents are able to solve a novel synthetic scenario requiring practically relevant cross-modal reasoning skills. On two large-scale, real-world datasets, Kinetics-700 and AudioSet, our agents successfully learn cost-reactive acquisition behavior. However, an ablation reveals they are unable to learn adaptive acquisition strategies, emphasizing the difficulty of the task even for state-of-the-art models. Applications of A2MT may be impactful in domains like medicine, robotics, or finance, where modalities differ in acquisition cost and informativeness.
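The cost/performance trade-off at the heart of A2MT can be caricatured with a greedy value-of-information rule: acquire a modality only if its estimated gain in predictive performance exceeds its acquisition cost. The gains and costs below are made-up inputs; a real A2MT agent (the paper uses Perceiver IO) must estimate these quantities per instance and over time rather than receive them as fixed numbers.

```python
def select_modalities(gains, costs):
    """Greedily pick modalities whose estimated performance gain exceeds
    their acquisition cost, most profitable first. `gains` and `costs`
    are hypothetical per-modality estimates, not learned quantities."""
    ranked = sorted(range(len(gains)), key=lambda i: gains[i] - costs[i], reverse=True)
    return [i for i in ranked if gains[i] > costs[i]]

# Hypothetical example: three modalities (e.g. video, audio, metadata).
chosen = select_modalities(gains=[0.10, 0.05, 0.01], costs=[0.02, 0.06, 0.005])
```

The static rule above is exactly what the paper's ablation shows is insufficient: agents that cannot adapt these estimates per input fail to learn adaptive acquisition strategies.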
Performance metrics for medical image segmentation models are used to measure the agreement between a reference annotation and a prediction. In the development of such models, a common set of metrics is used so that results are more comparable. However, there is a mismatch between the distributions of public datasets and the cases encountered in clinical practice. Many common metrics fail to measure the impact of this mismatch, especially for clinical datasets containing uncertain, small, or empty reference annotations. Thus, such metrics may fail to validate a model's clinically meaningful agreement. Dimensions for evaluating clinical value include independence from reference-annotation volume, consideration of reference-annotation uncertainty, reward of volumetric and/or location agreement, and reward of correct classification of empty reference annotations. Unlike common public datasets, our in-house dataset is more representative: it contains uncertain, small, and empty reference annotations. We examine publicly available metrics on the predictions of a deep-learning framework in order to identify the settings in which common metrics provide meaningful results. We compare against a public benchmark dataset without uncertain, small, or empty reference annotations. The code will be published.
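One of the listed desiderata, rewarding correct classification of empty reference annotations, is easy to see in code: the standard Dice score is undefined when both masks are empty, so a clinically oriented variant must define that case explicitly. A minimal sketch follows; the empty-empty convention of returning 1.0 is one common choice, not a universal standard.

```python
import numpy as np

def dice_with_empty(reference, prediction):
    """Dice overlap that explicitly handles empty reference annotations:
    both masks empty -> perfect agreement (1.0); exactly one empty -> 0.0."""
    ref = np.asarray(reference, dtype=bool)
    pred = np.asarray(prediction, dtype=bool)
    if not ref.any() and not pred.any():
        return 1.0  # correctly predicted an empty annotation
    intersection = np.logical_and(ref, pred).sum()
    return 2.0 * intersection / (ref.sum() + pred.sum())
```

Without the explicit branch, an empty-versus-empty comparison divides zero by zero, which is precisely why metrics validated only on public datasets with non-empty annotations can break down on clinical data.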
This paper discusses the creation and analysis of new datasets for data mining and text analytics research, contributing to a joint research project on the national dialects corpus at the University of Leeds. The report investigates machine learning classifiers for classifying French dialect texts from various French-speaking countries. Following the steps of the CRISP-DM methodology, the report explores the data collection process, data quality issues, and data transformation for text analysis. Finally, after applying suitable data mining techniques, the evaluation methods, best overall features, and classifiers are discussed, followed by conclusions.
The COVID-19 pandemic has caused devastating economic and social disruption and strained the resources of healthcare institutions worldwide. This has led to nationwide calls for models that predict hospitalization and severe illness in COVID-19 patients to inform the allocation of limited healthcare resources. We respond to one such call targeting the pediatric population. To address this challenge, we study two prediction tasks for the pediatric population using electronic health records: 1) predicting which children are more likely to be hospitalized, and 2) among hospitalized children, which are more likely to develop severe symptoms. We respond to the national Pediatric COVID-19 data challenge with a novel machine learning model, MedML. MedML extracts the most predictive features based on medical knowledge and propensity scores from over 6 million medical concepts, and incorporates inter-feature relationships between heterogeneous medical features via a graph neural network (GNN). We evaluate MedML on 143,605 patients for the hospitalization prediction task and 11,465 patients for the severity prediction task, using data from the National COVID Cohort Collaborative (N3C) dataset. We also report detailed group-level and individual-level feature importance analyses to assess the model's interpretability. Compared with the best baseline machine learning models, MedML achieves up to a 7% higher AUROC and up to a 14% higher AUPRC, and performs well across all nine national geographic regions and over all three-month spans since the start of the pandemic. Our cross-disciplinary research team has developed a method of incorporating clinical domain knowledge into a novel machine learning model framework that is more predictive and interpretable than current state-of-the-art data-driven feature selection methods.
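The way a GNN combines relationships between heterogeneous features, as MedML does, can be illustrated with a single generic message-passing layer: each node's representation becomes a degree-normalized average over itself and its neighbors, followed by a learned linear map and a nonlinearity. This is a textbook GCN-style layer in numpy, not MedML's actual architecture.

```python
import numpy as np

def gcn_layer(adj, features, weight):
    """One GCN-style propagation step: H' = ReLU(D^-1 (A + I) H W),
    where A is the adjacency matrix, I adds self-loops, and D is the
    self-loop-augmented degree matrix used for row normalization."""
    a_hat = adj + np.eye(adj.shape[0])
    d_inv = 1.0 / a_hat.sum(axis=1, keepdims=True)
    return np.maximum(d_inv * (a_hat @ features) @ weight, 0.0)
```

Stacking such layers lets information from medical concepts several hops apart (e.g. related diagnoses and medications) influence each node's final representation.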
Photorealistic avatars of human faces have come a long way in recent years, yet research in this area has been limited by the lack of publicly available, high-quality datasets. In this work, we present Multiface, a new multi-view, high-resolution human face dataset collected from 13 identities for neural face rendering research. We introduce Mugsy, a large-scale multi-camera apparatus that captures high-resolution synchronized videos of facial performances. The goal of Multiface is to close the gap in accessibility to high-quality data in the academic community and to enable research in VR telepresence. Along with the release of the dataset, we conduct ablation studies on the influence of different model architectures on the model's ability to interpolate to novel viewpoints and expressions. With a conditional VAE model as our baseline, we find that adding spatial bias, a texture warp field, and residual connections improves performance on novel view synthesis. Our code and data are available at: https://github.com/facebookresearch/multiface
A typical desideratum for quantifying the uncertainty from a classification model as a prediction set is class-conditional singleton set calibration. That is, such sets should map to the output of well-calibrated selective classifiers, matching the observed frequencies of similar instances. Recent works proposing adaptive and localized conformal p-values for deep networks do not guarantee this behavior, nor do they achieve it empirically. Instead, we use the strong signals for prediction reliability from KNN-based approximations of Transformer networks to construct data-driven partitions for Mondrian Conformal Predictors, which are treated as weak selective classifiers that are then calibrated via a new Inductive Venn Predictor, the Venn-ADMIT Predictor. The resulting selective classifiers are well-calibrated, in a conservative but practically useful sense for a given threshold. They are inherently robust to changes in the proportions of the data partitions, and straightforward conservative heuristics provide additional robustness to covariate shifts. We compare and contrast to the quantities produced by recent Conformal Predictors on several representative and challenging natural language processing classification tasks, including class-imbalanced and distribution-shifted settings.
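The Mondrian construction this work builds on is simple to state: conformal p-values are computed separately within each data partition (category), using only calibration scores from that partition. A minimal sketch with hypothetical nonconformity scores follows; the paper's contributions (KNN-derived partitions and Venn-ADMIT calibration) are layered on top of this basic mechanism.

```python
import numpy as np

def mondrian_p_value(test_score, calibration_scores, test_group, calibration_groups):
    """Conformal p-value computed within the test point's partition only.
    Higher nonconformity scores indicate stranger examples; the +1 terms
    give the standard finite-sample validity correction."""
    groups = np.asarray(calibration_groups)
    scores = np.asarray(calibration_scores)[groups == test_group]
    return (1 + np.sum(scores >= test_score)) / (len(scores) + 1)
```

Restricting the comparison to the test point's own partition is what makes the resulting guarantee group-conditional rather than merely marginal over the whole calibration set.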